Notes for Fodor & Pylyshyn's “Connectionism and Cognitive Architecture”

The toughest part is getting straight on what the dispute is over. As a first pass, the issue is whether we think in a ‘language of thought’. But what exactly does this mean? In a language, one has symbols that represent things. The word ‘cat’ represents cats, and the word ‘Vermont’ represents Vermont. So there are some physical states, such as patterns of ink on a page or patterns of lit-up pixels on a computer monitor, that are somehow about various things in the world. This is what is meant by calling these ‘intentional’ states. Both parties to the current dispute agree that there are states of our brains, certain firing patterns of neurons or whatever it might be, that also represent things out there in the world. The issue, however, is whether these mental representations have a compositional structure, just as linguistic representations do.

What does it mean for a representation to have compositional structure? Well, to begin with, it means that some representations are atomic, while other (‘molecular’) ones are built up out of atomic ones. In English, there’s a word that represents me, namely the word ‘Mark’. This inscription does have parts, in that it is composed of four letters, but this is not a semantically compositional structure, as we will see. Here, though, is another inscription that does have semantically compositional structure: “Mark loves Celeste.” This inscription is composed of three semantically basic (atomic) parts, namely the words ‘Mark’, ‘loves’, and ‘Celeste’, and each of these parts represents something (in this case: me, the relation of loving, and my daughter).

English has rules of syntax that govern how the parts (the words) of a complex inscription (a sentence) are put together. For example, we put adjectives before the nouns they qualify. Thus we write ‘brown dog’ instead of ‘dog brown’. And we put the subject of a verb before the verb and the object of the verb after the verb. That is why the inscription “Mark loves Celeste” is a different inscription from “Celeste loves Mark” (in some languages, word order doesn’t matter so much). Actually, the syntactic rules of English are extremely complex — much more complex than these two (not quite accurate) examples would indicate. Notice, though, that these are rules for forming a more complex inscription from its parts, and such rules could take all sorts of forms. For example, you could have a rule that says that when you are qualifying a noun by an adjective, you write the adjective at 90 degrees to the noun and change the first letter of the noun to the next letter in the alphabet. With this rule one would express the idea of a brown dog by:

b
r
o
w
n   eog
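Just to make vivid that this invented rule is a perfectly definite procedure, here is a quick sketch of it as a function (the code and the name `qualify` are mine, not part of the notes):

```python
def qualify(adj, noun):
    """Apply the invented rule from the notes: write the adjective at
    90 degrees, one letter per line, and change the first letter of
    the noun to the next letter of the alphabet."""
    bumped = chr(ord(noun[0]) + 1) + noun[1:]   # 'dog' -> 'eog'
    lines = list(adj[:-1])                      # vertical adjective
    lines.append(adj[-1] + "   " + bumped)      # noun beside the last letter
    return "\n".join(lines)

print(qualify("brown", "dog"))
```

Running this prints exactly the vertical ‘brown’ with ‘eog’ beside it, as above. The point is only that any such rule, however strange, counts as a syntactic rule for building complex inscriptions from parts.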

Whatever the rules are, they give us syntactic compositionality: the physical states (e.g., inscriptions) that are complex representations are built up from the physical states that are atomic representations according to rules.

Together with this, though, is semantic compositionality: the meaning of a complex representation is built up from the meanings of the atomic representations of which it is composed and the way in which they are joined together by the syntactic rules. Thus, because in English an adjective that restricts a noun is placed before it, ‘brown dog’ will pick out those dogs that are brown. As long as you know the meanings of the atomic parts and you know the rules for how they are joined together, you will be able to understand the meaning of the resulting complex expression.
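The same idea can be put in miniature as a toy program (my sketch, not Fodor and Pylyshyn's): give each atomic symbol a meaning, and compute the meaning of a whole sentence from those parts plus the word-order rule (subject before the verb, object after). The one-fact ‘world’ here is made up purely for illustration:

```python
# Atomic representations and their meanings: names denote individuals,
# and the verb denotes a set of ordered pairs (a made-up one-fact world).
lexicon = {
    "Mark": "Mark",
    "Celeste": "Celeste",
    "loves": {("Mark", "Celeste")},
}

def meaning(sentence):
    """Semantic compositionality in miniature: the truth value of a
    subject-verb-object sentence is a function of the meanings of its
    atomic parts and the syntactic rule that joins them."""
    subj, verb, obj = sentence.split()
    return (lexicon[subj], lexicon[obj]) in lexicon[verb]

print(meaning("Mark loves Celeste"))   # True in this toy world
print(meaning("Celeste loves Mark"))   # False: word order matters
```

Notice that nothing about the letters ‘M-a-r-k’ contributes to the meaning; only the whole word does, and the sentence's meaning is computed from the words plus their arrangement. That is the contrast with the non-compositional parts of ‘Mark’ noted above.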

So far we’ve just talked about linguistic representations, but the Language of Thought (LOT) hypothesis says that the same thing holds for mental representations. According to this hypothesis, there are some atomic representations in our brains; that is, there is some state of your brain that represents, for example, Bush. The LOT hypothesis has nothing to say about how these atomic representations actually succeed in representing what they represent; it just says that there are such mental states. What it does say is that there are also other, more complex mental representations (e.g., a state of your brain representing that Bush was president) that represent more complex things by somehow having the atomic representations as parts. These atomic mental representations are joined together into a complex mental representation according to some sort of rules, such that the meaning of a complex representation is a function of (a) the meanings of the atomic representations of which it is composed and (b) the rules by which they are composed. This is the thesis Fodor and Pylyshyn are trying to defend, whereas Connectionist theories, at least according to Fodor and Pylyshyn, say that our minds are connectionist networks that represent things without having this sort of compositionality.